Explainable AI (XAI) is widely viewed as a sine qua non for ever-expanding AI research. Better understanding the needs of XAI users and conducting human-centered evaluations of explainable models are both a necessity and a challenge. In this paper, we explore how HCI and AI researchers conduct user studies in XAI applications, based on a systematic literature review. After identifying and thoroughly analyzing 85 core papers with human-based XAI evaluations from the past five years, we categorize them along the measured characteristics of explanatory methods, namely trust, understanding, fairness, usability, and human-AI team performance. Our research shows that XAI is spreading more rapidly in certain application domains, such as recommender systems, than in others, but that user evaluations are still rather sparse and incorporate hardly any insights from cognitive or social sciences. Based on a comprehensive discussion of best practices, i.e., common models, design choices, and measures in user studies, we propose practical guidelines for XAI researchers and practitioners on designing and conducting user studies. Lastly, this survey also highlights several open research directions, particularly linking psychological science and human-centered XAI.
As machine learning (ML) models are increasingly deployed in high-stakes applications, policymakers have proposed stricter data protection regulations (e.g., GDPR, CCPA). One key principle is the "right to be forgotten", which gives users the right to have their data deleted. Another key principle is the right to actionable explanations, also known as algorithmic recourse, which allows users to reverse unfavorable decisions. To date, it is unknown whether these two principles can be operationalized simultaneously. Therefore, we introduce and study the problem of recourse invalidation in the context of data deletion requests. More specifically, we theoretically and empirically analyze the behavior of popular state-of-the-art algorithms and demonstrate that the recourses generated by these algorithms are likely to be invalidated if a small number of data deletion requests (e.g., 1 or 2) warrant updates of the predictive model. For the settings of linear models and overparameterized neural networks (the latter studied through the lens of the neural tangent kernel, NTK), we propose a framework to identify a minimal subset of critical training points which, when removed, maximize the fraction of invalidated recourses. Using our framework, we empirically establish that removing as few as 2 data instances from the training set can invalidate up to 95% of all recourses output by popular state-of-the-art algorithms. Thus, our work raises fundamental questions about the compatibility of the "right to an actionable explanation" with the "right to be forgotten".
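A minimal sketch of the phenomenon, assuming a linear model and a simple gradient-based recourse generator (all names and the deletion heuristic are illustrative, not the paper's framework, which searches for the critical points directly):

```python
# Illustrative sketch: fit a linear model, compute a recourse for a rejected
# point, delete one training instance, refit, and re-check the recourse.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def fit(X, y):
    return LogisticRegression().fit(X, y)

def recourse(model, x, target=1, step=0.1, max_iter=100):
    # Move x along the model's weight vector until the prediction flips.
    w = model.coef_[0]
    x = x.copy()
    for _ in range(max_iter):
        if model.predict([x])[0] == target:
            break
        x += step * w / np.linalg.norm(w)
    return x

model = fit(X, y)
x_rejected = X[model.predict(X) == 0][0]
x_cf = recourse(model, x_rejected)

# Heuristic deletion: drop the training point closest to the boundary, refit.
margins = np.abs(model.decision_function(X))
keep = np.ones(len(X), bool)
keep[np.argmin(margins)] = False
model_after = fit(X[keep], y[keep])

print("recourse valid before deletion:", model.predict([x_cf])[0] == 1)
print("recourse valid after deletion: ", model_after.predict([x_cf])[0] == 1)
```

Whether this particular deletion invalidates the recourse depends on the data; the point of the paper's framework is precisely to identify the deletions that do.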
Interest in understanding and decomposing learned embedding spaces is growing. For example, recent concept-based explanation techniques analyze machine learning models in terms of interpretable latent components. Such components have to be discovered in the model's embedding space, for instance through independent component analysis (ICA) or modern disentanglement learning techniques. While these unsupervised approaches offer a sound formal framework, they either require access to the data-generating function or impose rigid assumptions on the data distribution, such as independence of the components, that are often violated in practice. In this work, we link conceptual explainability for vision models to disentanglement learning and ICA. This allows us to provide the first theoretical results on how components can be identified without requiring any distributional assumptions. From these insights, we derive an approach (DA) that applies to a broader range of problems than current methods while offering formal identifiability guarantees. In an extensive comparison against component analysis and over 300 state-of-the-art disentanglement models, DA stably maintains superior performance, even under varying distributions and correlation strengths.
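As a concrete illustration of the component-discovery step, the following hedged sketch recovers candidate concept directions from synthetic embeddings with scikit-learn's FastICA; note that ICA relies on exactly the kind of independence assumption the paper argues is often violated in practice:

```python
# Sketch: discover candidate concept directions in an embedding space via ICA.
# The embeddings here are synthetic stand-ins for a vision model's features.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
sources = rng.laplace(size=(1000, 5))   # independent non-Gaussian latent components
mixing = rng.normal(size=(5, 32))       # unknown mixing into a 32-d embedding
embeddings = sources @ mixing           # pretend: 1000 image embeddings

ica = FastICA(n_components=5, random_state=0)
components = ica.fit_transform(embeddings)  # recovered component activations
directions = ica.mixing_                    # one direction per component

print(directions.shape)  # (32, 5): candidate concept axes in embedding space
```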
A variety of local feature attribution methods have been proposed in recent years, and follow-up work has proposed several evaluation strategies. To assess the attribution quality of different attribution techniques, the most popular of these evaluation strategies in the image domain rely on pixel perturbations. However, recent work has found that different evaluation strategies yield conflicting rankings of attribution methods and can be computationally expensive. In this work, we present an information-theoretic analysis of pixel-perturbation-based evaluation strategies. Our findings show that the results are strongly affected by information leakage through the shape of the removed pixels rather than their actual values. Based on our theoretical insights, we propose a novel evaluation framework termed Remove and Debias (ROAD), which offers two contributions: first, it mitigates the influence of the confounders, which leads to higher consistency among evaluation strategies; second, ROAD does not require a computationally expensive retraining step and saves up to 99% of the computational cost compared to the state of the art. We release our source code at https://github.com/tleemann/road_evaluation.
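A simplified, hedged sketch of a ROAD-style evaluation loop; the neighbor-mean imputation below is a crude stand-in for the paper's noisy linear imputation, and `model_predict` is an assumed user-supplied callable (see the released code for the actual implementation):

```python
# Sketch: remove top-attributed pixels and fill them with a noisy average of
# visible neighbors, so the mask shape leaks less class information than
# constant-value masking; then score the perturbed image.
import numpy as np

def noisy_neighbor_imputation(img, mask, sigma=0.05, rng=None):
    """Replace masked pixels (mask == True) by the mean of their unmasked
    4-neighbors plus Gaussian noise (crude stand-in for noisy linear
    imputation)."""
    rng = rng or np.random.default_rng(0)
    out = img.copy()
    H, W = img.shape
    for i, j in zip(*np.where(mask)):
        neigh = [img[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                 if 0 <= a < H and 0 <= b < W and not mask[a, b]]
        base = np.mean(neigh) if neigh else img.mean()
        out[i, j] = base + rng.normal(scale=sigma)
    return out

def road_curve(model_predict, img, attribution, fractions=(0.1, 0.3, 0.5)):
    """model_predict: assumed callable mapping an image array to a class score.
    Returns scores after removing increasing fractions of top-attributed
    pixels; steeper drops indicate more faithful attributions."""
    order = np.argsort(attribution, axis=None)[::-1]
    scores = []
    for frac in fractions:
        k = int(frac * img.size)
        mask = np.zeros(img.shape, bool)
        mask[np.unravel_index(order[:k], img.shape)] = True
        scores.append(model_predict(noisy_neighbor_imputation(img, mask)))
    return scores
```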
Heterogeneous tabular data are the most commonly used form of data and are essential for numerous critical and computationally demanding applications. On homogeneous data sets, deep neural networks have repeatedly shown excellent performance and have therefore been widely adopted. However, adapting them to tabular data for inference or data generation tasks remains challenging. To facilitate further progress in this field, this work gives an overview of state-of-the-art deep learning methods for tabular data. We categorize these methods into three groups: data transformations, specialized architectures, and regularization models. For each group, our work provides a comprehensive overview of the main approaches. Moreover, we discuss deep learning approaches for generating tabular data and also give an overview of strategies for explaining deep models on tabular data. Our first contribution is thus to address the main research streams and existing methodologies in these areas while highlighting relevant challenges and open research questions. Our second contribution is an empirical comparison of traditional machine learning methods with ten deep learning approaches on five popular real-world data sets of different sizes and with different learning objectives. Our results, which we have made publicly available as competitive benchmarks, indicate that algorithms based on gradient-boosted tree ensembles still mostly outperform deep learning models on supervised learning tasks, suggesting that research progress on competitive deep learning models for tabular data is stagnating. To the best of our knowledge, this is the first in-depth overview of deep learning approaches for tabular data. Hence, this work can serve as a valuable starting point to guide researchers and practitioners interested in deep learning on tabular data.
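The following hedged sketch illustrates the kind of baseline comparison the survey reports, pitting a gradient-boosted tree ensemble against a simple neural network on a small tabular task; the dataset and models here are stand-ins, not the survey's benchmark suite:

```python
# Sketch: GBDT vs. MLP on a tabular classification task with 5-fold CV.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

gbdt = HistGradientBoostingClassifier(random_state=0)
mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 64),
                                  max_iter=500, random_state=0))

for name, model in [("GBDT", gbdt), ("MLP", mlp)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```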
The release of ChatGPT, a language model capable of generating text that appears human-like and authentic, has attracted significant attention beyond the research community. We expect that ChatGPT's convincing performance will incentivize users to apply it to a variety of downstream tasks, including prompting the model to simplify their own medical reports. To investigate this phenomenon, we conducted an exploratory case study. In a questionnaire, we asked 15 radiologists to assess the quality of radiology reports simplified by ChatGPT. Most radiologists agreed that the simplified reports were factually correct, complete, and not potentially harmful to the patient. Nevertheless, instances of incorrect statements, missed key medical findings, and potentially harmful passages were reported. While further studies are needed, these initial insights indicate great potential for using large language models like ChatGPT to improve patient-centered care in radiology and other medical domains.
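For illustration, a hedged sketch of the kind of prompt such a study examines, using the openai Python client; the model name and prompt wording are assumptions, and any clinical use would require expert review, as the study's findings on errors and omissions underline:

```python
# Sketch: ask a chat model to simplify a radiology report for a layperson.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

report = "..."  # a de-identified radiology report

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model; the study used ChatGPT
    messages=[
        {"role": "system",
         "content": "You simplify radiology reports into plain language "
                    "for patients without medical training. Do not add, "
                    "remove, or reinterpret findings."},
        {"role": "user", "content": report},
    ],
)
print(response.choices[0].message.content)
```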
In recent years, several metrics have been developed for evaluating the group fairness of rankings. Given that these metrics were developed with different application contexts and ranking algorithms in mind, it is not straightforward which metric to choose for a given scenario. In this paper, we perform a comprehensive comparative analysis of existing group fairness metrics developed in the context of fair ranking. Because of their diverse application contexts, we argue that such a comparative analysis is not straightforward. Hence, we take an axiomatic approach whereby we design a set of thirteen properties for group fairness metrics that consider different ranking settings. A metric can then be selected depending on whether it satisfies all or a subset of these properties. We apply these properties to eleven existing group fairness metrics, and through both empirical and theoretical results we demonstrate that most of these metrics satisfy only a small subset of the proposed properties. These findings highlight the limitations of existing metrics and provide insights into how to evaluate and interpret different fairness metrics in practical deployments. The proposed properties can also assist practitioners in selecting appropriate metrics for evaluating fairness in a specific application.
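As one concrete example of the metric family under study, the following hedged sketch computes exposure-based group fairness for a single ranking with a logarithmic position discount; the exact definitions vary across the eleven metrics the paper compares:

```python
# Sketch: each group's share of position-discounted exposure in one ranking.
import math

def group_exposure(ranking, group_of):
    """ranking: list of item ids, top first. group_of: item id -> group.
    Returns each group's share of total exposure, discounting position i
    by 1/log2(i + 1)."""
    exposure = {}
    for pos, item in enumerate(ranking, start=1):
        g = group_of[item]
        exposure[g] = exposure.get(g, 0.0) + 1.0 / math.log2(pos + 1)
    total = sum(exposure.values())
    return {g: e / total for g, e in exposure.items()}

ranking = ["a", "b", "c", "d"]
group_of = {"a": "G1", "b": "G1", "c": "G2", "d": "G2"}
print(group_exposure(ranking, group_of))
# Compare each group's exposure share to its size or merit, depending
# on which fairness notion the chosen metric encodes.
```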
In recent years, distributional reinforcement learning has produced many state-of-the-art results. Increasingly sample-efficient distributional algorithms for the discrete action domain have been developed over time, varying primarily in how they parameterize their approximations of value distributions and how they quantify the differences between those distributions. In this work, we transfer three of the most well-known and successful of those algorithms (QR-DQN, IQN, and FQF) to the continuous action domain by extending two powerful actor-critic algorithms (TD3 and SAC) with distributional critics. We investigate whether the relative performance of the methods in the discrete action space carries over to the continuous case. To that end, we compare them empirically on the pybullet implementations of a set of continuous control tasks. Our results indicate qualitative invariance with regard to the number and placement of distributional atoms in the deterministic, continuous action setting.
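At the core of the transferred critics is quantile regression; the following hedged PyTorch sketch shows a standalone quantile Huber loss of the kind used by QR-DQN-style critics (a simplified version, not the paper's exact implementation):

```python
# Sketch: quantile Huber loss between predicted quantiles and target samples.
import torch
import torch.nn.functional as F

def quantile_huber_loss(pred_quantiles, target_samples, kappa=1.0):
    """pred_quantiles: (batch, N) predicted quantile values.
    target_samples:  (batch, M) samples from the target return distribution."""
    N = pred_quantiles.shape[1]
    taus = (torch.arange(N, dtype=torch.float32) + 0.5) / N  # quantile midpoints
    # Pairwise TD errors between every target sample and predicted quantile.
    td = target_samples.unsqueeze(2) - pred_quantiles.unsqueeze(1)  # (B, M, N)
    huber = F.huber_loss(pred_quantiles.unsqueeze(1).expand_as(td),
                         target_samples.unsqueeze(2).expand_as(td),
                         reduction="none", delta=kappa)
    # Asymmetric quantile weighting: |tau - 1{td < 0}|.
    weight = torch.abs(taus.view(1, 1, N) - (td.detach() < 0).float())
    return (weight * huber).mean()
```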
Artificial Intelligence (AI) has become commonplace for solving routine everyday tasks. Because of the exponential growth in medical imaging data volume and complexity, the workload on radiologists is steadily increasing. We project that the gap between the number of imaging exams and the number of expert radiologist readers required to cover this increase will continue to widen, consequently creating a demand for AI-based tools that improve the efficiency with which radiologists can comfortably interpret these exams. AI has been shown to improve efficiency in medical-image generation, processing, and interpretation, and a variety of such AI models have been developed across research labs worldwide. However, very few of these, if any, find their way into routine clinical use, a discrepancy that reflects the divide between AI research and successful AI translation. To address this barrier to clinical deployment, we have formed the MONAI Consortium, an open-source community that is building standards for AI deployment in healthcare institutions and developing tools and infrastructure to facilitate their implementation. This report represents several years of weekly discussions and hands-on problem-solving experience by groups of industry experts and clinicians in the MONAI Consortium. We identify barriers between AI-model development in research labs and subsequent clinical deployment, and we propose solutions. Our report provides guidance on the processes that take an imaging AI model from development to clinical implementation in a healthcare institution. We discuss various AI integration points in a clinical radiology workflow and present a taxonomy of radiology AI use cases. Through this report, we intend to educate stakeholders in healthcare and AI (AI researchers, radiologists, imaging informaticists, and regulators) about cross-disciplinary challenges and possible solutions.
Heating in private households is a major contributor to today's emissions. Heat pumps are a promising alternative for heat generation and a key technology for achieving the goals of the German energy transition and reducing dependence on fossil fuels. Today, the majority of heat pumps in the field are controlled by a simple heating curve, a naive mapping from the current outdoor temperature to a control action. A more advanced control approach is model predictive control (MPC), which has been applied to heat pump control in multiple research works. However, MPC depends heavily on a building model, which comes with several disadvantages. Motivated by this and by recent breakthroughs in the field, this work applies deep reinforcement learning (DRL) to heat pump control in a simulated environment. A comparison with MPC shows that DRL can be applied in a model-free manner and achieve MPC-like performance. This work extends prior applications of DRL to building heating operation by performing an in-depth analysis of the learned control strategies and by giving a detailed comparison of the two state-of-the-art control methods.
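A hedged sketch of the kind of simulated environment such DRL agents are trained on, using the gymnasium API with a single-zone first-order thermal model; all constants are illustrative, not the paper's simulation:

```python
# Sketch: minimal heat pump control environment for DRL training.
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class HeatPumpEnv(gym.Env):
    def __init__(self, t_set=21.0, t_out=0.0):
        self.observation_space = spaces.Box(low=-50, high=50, shape=(2,),
                                            dtype=np.float32)
        self.action_space = spaces.Box(low=0.0, high=1.0, shape=(1,),
                                       dtype=np.float32)
        self.t_set, self.t_out = t_set, t_out

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t_room = 18.0
        return np.array([self.t_room, self.t_out], np.float32), {}

    def step(self, action):
        power = float(action[0])                  # normalized heat pump power
        # First-order dynamics: heat loss to outside plus heat input
        # (coefficient of performance folded into the gain constant).
        self.t_room += 0.1 * (self.t_out - self.t_room) + 3.0 * power
        comfort = -abs(self.t_room - self.t_set)  # track the set point
        energy = -0.5 * power                     # penalize consumption
        obs = np.array([self.t_room, self.t_out], np.float32)
        return obs, comfort + energy, False, False, {}
```

The comfort and energy terms mirror the trade-off an MPC objective would encode; a DRL agent has to discover it model-free from the reward alone.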